Hint-based cache design for reducing miss penalty in HBS packet classification algorithm

Authors

  • Yeim-Kuan Chang
  • Fang-Chen Kuo
Abstract

In this paper, we implement some notable hierarchical or decision-tree-based packet classification algorithms such as extended grid of tries (EGT), hierarchical intelligent cuttings (HiCuts), HyperCuts, and hierarchical binary search (HBS) on an IXP2400 network processor. By using all six of the available processing microengines (MEs), we find that none of these existing packet classification algorithms achieve the line speed of OC-48 provided by IXP2400. To improve the search speed of these packet classification algorithms, we propose the use of software cache designs to take advantage of the temporal locality of the packets because IXP network processors have no built-in caches for fast path processing in MEs. Furthermore, we propose hint-based cache designs to reduce the search duration of the packet classification data structure when cache misses occur. Both the header and prefix caches are studied. Although the proposed cache schemes are designed for all the dimension-by-dimension packet classification schemes, they are, nonetheless, the most suitable for HBS. Our performance simulations show that the HBS enhanced with the proposed cache schemes performs the best in terms of classification speed and number of memory accesses when the memory requirement is in the same range as those of HiCuts and HyperCuts. Based on the experiments with all the high and low locality packet traces, five MEs are sufficient for the proposed rule cache with hints to achieve the line speed of OC-48 provided by IXP2400. © 2013 Elsevier Inc. All rights reserved.
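The core idea of a hint-based cache, as the abstract describes it, can be sketched in a few lines: each cache slot stores not only a header-to-rule mapping but also a hint pointing into the classification data structure, so that a miss can resume the search partway down instead of restarting from the root. The sketch below is an illustration only, assuming a direct-mapped header cache and a caller-supplied tree search; all names (`HintedHeaderCache`, `search_from`) are hypothetical and not from the paper.

```python
# Illustrative sketch of a hint-based header cache (names are assumptions,
# not the paper's code). On a hit, the cached rule is returned with no tree
# search; on a miss, the search starts from the evicted entry's hint node
# rather than from the root, shortening the miss penalty.

class HintedHeaderCache:
    def __init__(self, size=256):
        self.size = size
        self.slots = [None] * size  # each slot: (header, rule, hint_node)

    def _index(self, header):
        return hash(header) % self.size

    def lookup(self, header, search_from):
        """search_from(hint_node, header) -> (rule, final_node); hint may be None."""
        i = self._index(header)
        entry = self.slots[i]
        if entry is not None and entry[0] == header:
            return entry[1]                      # cache hit: no tree search at all
        hint = entry[2] if entry is not None else None
        rule, node = search_from(hint, header)   # miss: resume search from hint
        self.slots[i] = (header, rule, node)     # refill slot, remember new hint
        return rule
```

A second lookup of the same header hits without invoking the search function, and a colliding header reuses the evicted entry's hint as its starting point.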


Related articles

Reducing the Read-Miss Penalty for Flat COMA Protocols

In flat cache-only memory architectures (COMA), an attraction-memory miss must first interrogate a directory before a copy of the requested data can be located, which often involves three network traversals. By keeping track of the identity of a potential holder of the copy—called a hint—one network traversal can be saved, which reduces the read penalty. We have evaluated the reduction of the rea...


EE 8365, Advanced Computer Architecture, Spring 2000: ORL – Modified L2 Cache Replacement Algorithm

A limit to computer system performance is the miss penalty for fetching data and instructions from lower levels in the memory hierarchy. There are two approaches to reducing this penalty. The first approach is to reduce the miss rates of the higher cache levels by utilizing an effective replacement policy that does not replace data that is going to be needed. The second approach is to reduce th...
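The first approach mentioned above, an effective replacement policy, is usually discussed against the LRU baseline. The excerpt does not describe the ORL policy itself, so the sketch below shows only that baseline: a plain LRU cache, the kind of policy such work modifies.

```python
from collections import OrderedDict

# Illustrative LRU baseline only (the ORL policy is not described in the
# excerpt above). An OrderedDict keeps entries in recency order: a get
# moves the key to the most-recently-used end, and an insert past capacity
# evicts from the least-recently-used end.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```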


A cache-based internet protocol address lookup architecture

This paper proposes a novel Internet Protocol (IP) packet forwarding architecture for IP routers. This architecture is comprised of a non-blocking Multizone Pipelined Cache (MPC) and of a hardware-supported IP routing lookup method. The paper also describes a method for expansion-free software lookups. The MPC achieves lower miss rates than those reported in the literature. The MPC uses a two-s...


Design, Implementation, and Analysis of a Reduced Crossbar

The crossbar is the fastest switching architecture available but is the most expensive in terms of hardware cost. The hardware complexity of a crossbar is Θ(nw), where n is the number of processors and w is the width of the data path. The reduced crossbar is a new type of switching architecture, which reduces the hardware complexity of a traditional crossbar by a factor of k, where k is the re...


A Multizone Pipelined Cache for IP Routing

Caching recently referenced IP addresses and their forwarding information is an effective strategy to increase routing lookup speed. This paper proposes a multizone non-blocking pipelined cache for IP routing lookup that achieves lower miss rates compared to previously reported IP caches. The two-stage pipeline design provides a half-prefix half-full address cache and reduces the cache power con...
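The half-prefix half-full address organization can be sketched as two zones consulted in order: an exact-match zone of full 32-bit destination addresses, and a prefix zone where one entry covers many addresses via longest-prefix match. This is an illustration assumed from the abstract, not the paper's design; the class and method names are hypothetical.

```python
# Illustrative two-zone IP cache (structure assumed from the abstract).
# Zone 1 holds full 32-bit addresses for exact matches; zone 2 holds
# prefixes, so a single cached entry can answer lookups for a whole range.

class MultizoneCache:
    def __init__(self):
        self.full = {}     # full 32-bit address -> next hop
        self.prefix = {}   # (prefix_bits, prefix_length) -> next hop

    def insert_full(self, addr, hop):
        self.full[addr] = hop

    def insert_prefix(self, addr, length, hop):
        self.prefix[(addr >> (32 - length), length)] = hop

    def lookup(self, addr):
        if addr in self.full:                  # zone 1: exact match first
            return self.full[addr]
        for length in range(32, 0, -1):        # zone 2: longest-prefix match
            key = (addr >> (32 - length), length)
            if key in self.prefix:
                return self.prefix[key]
        return None                            # cache miss in both zones
```

For example, caching the prefix 10.0.0.0/8 lets every address in that /8 hit the prefix zone, while a conflicting host route can still be pinned in the full-address zone.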



Journal:
  • J. Parallel Distrib. Comput.

Volume 73, Issue –

Pages –

Publication date: 2013